In this paper, we study the open research problem of generating text-image pairs to improve the training of fine-grained image-to-text cross-modal retrieval, and propose a novel framework for paired data augmentation by uncovering the hidden semantic information of the StyleGAN2 model. Specifically, we first train a StyleGAN2 model on the given dataset. We then project real images back into the latent space of StyleGAN2 to obtain latent codes. To make the generated images manipulable, we further introduce a latent space alignment module to learn the alignment between StyleGAN2 latent codes and the corresponding text caption features. When we perform online paired data augmentation, we first generate augmented text through random token replacement, then feed the augmented text into the latent space alignment module to output latent codes, which are finally fed into StyleGAN2 to generate augmented images. We evaluate the efficacy of our augmented data approach on two public cross-modal retrieval datasets, where the promising experimental results demonstrate that the augmented text-image pair data can be trained together with the original data to boost image-to-text cross-modal retrieval performance.
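The online augmentation step can be summarized in a few lines of PyTorch. Everything here is a hypothetical stand-in (the alignment module architecture, the token-replacement rule, and the `text_encoder`/`generator` interfaces are not specified in the abstract); this is a minimal sketch, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn

class LatentAlignment(nn.Module):
    """Hypothetical module mapping text-caption features to StyleGAN2 latent codes."""
    def __init__(self, text_dim: int = 512, latent_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(text_dim, 512), nn.ReLU(),
                                 nn.Linear(512, latent_dim))

    def forward(self, text_feat: torch.Tensor) -> torch.Tensor:
        return self.net(text_feat)

def augment_caption(tokens, vocab, p=0.15):
    """Random token replacement: swap each token for a random vocab word with prob p."""
    return [random.choice(vocab) if random.random() < p else t for t in tokens]

def augment_pair(tokens, vocab, text_encoder, align, generator):
    """One online augmentation step: augmented text -> latent code -> augmented image."""
    aug_tokens = augment_caption(tokens, vocab)
    text_feat = text_encoder(aug_tokens)   # placeholder text encoder, (1, text_dim)
    w = align(text_feat)                   # latent code in StyleGAN2 w-space
    aug_image = generator(w)               # placeholder StyleGAN2 generator
    return aug_tokens, aug_image
```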
In this paper, we investigate an open research task of generating 3D cartoon face shapes from a single 2D GAN trained on human faces, without 3D supervision, where we can also manipulate the facial expressions of the 3D shapes. To this end, we discover the semantic meanings of the StyleGAN latent space, so that we are able to produce face images of various expressions, poses, and lighting by controlling the latent codes. Specifically, we first fine-tune a pretrained StyleGAN face model on a cartoon dataset. By feeding the same latent codes into the face and cartoon generation models, we aim to achieve the translation from 2D human face images to cartoon-styled avatars. We then discover semantic directions of the GAN latent space, in an attempt to change facial expressions while preserving the original identity. As we do not have any 3D annotations for cartoon faces, we manipulate the latent codes to generate images with different poses and lighting, so that we can reconstruct the 3D cartoon face shapes. We validate the efficacy of our method on three cartoon datasets, both qualitatively and quantitatively.
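A minimal sketch of the shared-latent-code idea described above; the two generators, the inverted latent code, and the semantic direction are all hypothetical placeholders, since the abstract does not specify these interfaces.

```python
import torch

@torch.no_grad()
def face_to_cartoon(w, g_face, g_cartoon, direction, alpha=0.0):
    """Feed the same latent code into the face and cartoon generators.

    w:          (1, num_layers, 512) latent code obtained by GAN inversion
    g_face:     pretrained StyleGAN face generator (placeholder)
    g_cartoon:  the same generator fine-tuned on a cartoon dataset (placeholder)
    direction:  a discovered semantic direction in latent space (e.g. a smile axis)
    alpha:      edit strength; 0 keeps the original expression
    """
    w_edit = w + alpha * direction   # expression edit intended to preserve identity
    face = g_face(w_edit)            # edited human face
    cartoon = g_cartoon(w_edit)      # corresponding cartoon avatar
    return face, cartoon

# For 3D reconstruction, one would render the same identity under varied pose
# and lighting edits and feed the views to a multi-view reconstruction step:
# views = [g_cartoon(w + a * pose_dir + b * light_dir) for a, b in samples]
```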
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
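For context, BIG-bench distributes most tasks as JSON files. A hedged sketch of exact-match scoring on one such task might look as follows, where `query_model` is a placeholder for the model API under evaluation and the `examples`/`input`/`target` layout is assumed from the benchmark's JSON task format.

```python
import json

def exact_match_score(task_path: str, query_model) -> float:
    """Score a model on a BIG-bench-style JSON task with exact-match accuracy.

    Assumes the task file holds {"examples": [{"input": ..., "target": ...}]};
    some tasks use richer target formats, which this toy scorer ignores.
    query_model is a placeholder callable mapping a prompt string to a string.
    """
    with open(task_path) as f:
        task = json.load(f)
    examples = task["examples"]
    hits = sum(
        query_model(ex["input"]).strip() == str(ex["target"]).strip()
        for ex in examples
    )
    return hits / len(examples)
```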
The accelerating development of autonomous driving technology has brought greater demand for large amounts of high-quality data. Labeled, representative real-world data serves as the fuel for training deep learning networks, and is critical for improving self-driving perception algorithms. In this paper, we introduce PandaSet, the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing long-range LiDAR, and 6 cameras. The dataset contains more than 100 scenes, each 8 seconds long, and provides 28 types of labels for object classification and 37 types of semantic segmentation labels. We provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D object detection, and LiDAR point cloud segmentation. For more details about PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
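A short sketch of reading one scene with the accompanying devkit (`pip install pandaset`); the attribute names below follow the devkit's documented usage but should be treated as assumptions and checked against the repository.

```python
from pandaset import DataSet  # the PandaSet development kit

# Assumed devkit usage, per its README; verify against the repository.
dataset = DataSet("/data/pandaset")        # root folder of the extracted dataset
seq = dataset["002"]                       # one ~8 s scene
seq.load()                                 # load lidar, cameras, and labels

points = seq.lidar[0]                      # frame 0 point cloud (pandas DataFrame)
front = seq.camera["front_camera"][0]      # frame 0 front camera image
print(points[["x", "y", "z"]].head())
```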
Generation and analysis of time-series data is relevant to many quantitative fields, ranging from economics to fluid mechanics. In the physical sciences, structures such as metastable and coherent sets, slow relaxation processes, collective variables, dominant transition pathways, or manifolds and channels of probability flow can be of great importance for understanding and characterizing the kinetic, thermodynamic, and mechanistic properties of a system. Deeptime is a general-purpose Python library offering various tools to estimate dynamical models based on time-series data, including conventional linear learning methods such as Markov state models (MSMs), hidden Markov models, and Koopman models, as well as kernel and deep learning approaches such as VAMPnets and deep MSMs. The library is largely compatible with scikit-learn, providing a range of estimator classes for these different models, but in contrast to scikit-learn it also provides deep model classes, e.g. in the case of an MSM, which offer a multitude of analysis methods to compute interesting thermodynamic, kinetic, and dynamical quantities such as free energies, relaxation times, and transition paths. The library is designed for ease of use, but also for easily maintainable and extensible code. In this paper, we introduce the main features and structure of the deeptime software.
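Since the library follows scikit-learn's estimator/model pattern, a typical workflow, sketched here on deeptime's bundled discrete double-well example, is to fit an estimator and then fetch the resulting model for analysis:

```python
import numpy as np
from deeptime.data import double_well_discrete
from deeptime.markov import TransitionCountEstimator
from deeptime.markov.msm import MaximumLikelihoodMSM

dtraj = double_well_discrete().dtraj                 # bundled discrete trajectory

# Count transitions at a chosen lag time, then estimate a reversible MSM.
counts = (TransitionCountEstimator(lagtime=10, count_mode="sliding")
          .fit(dtraj).fetch_model())
msm = MaximumLikelihoodMSM(reversible=True).fit(counts).fetch_model()

print(msm.transition_matrix.shape)                   # (n_states, n_states)
print(msm.timescales(k=3))                           # slowest relaxation timescales
free_energy = -np.log(msm.stationary_distribution)   # free energies in units of kT
```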
Food is important to human daily life. In this paper, we are interested in learning structural representations for lengthy recipes, which can benefit the recipe generation and food cross-modal retrieval tasks. Different from common visual data, the food images here contain mixed ingredients, and the target recipes are lengthy paragraphs for which we have no annotations on structure information. To address these limitations, we propose a novel method to unsupervisedly learn sentence-level tree structures for cooking recipes. Our approach brings together several novel ideas in a systematic framework: (1) exploiting an unsupervised learning approach to obtain sentence-level tree structure labels before training; (2) generating trees of target recipes from images, supervised by the tree structure labels learned in (1); and (3) integrating the learned tree structures into the recipe generation and food cross-modal retrieval procedures. Our proposed model can generate good-quality sentence-level tree structures and coherent recipes. We achieve state-of-the-art recipe generation and food cross-modal retrieval performance on the benchmark Recipe1M dataset.
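As a toy illustration only (the paper's unsupervised tree-induction method is not specified in this abstract), a sentence-level tree over a recipe can be grown by greedily merging adjacent sentence embeddings:

```python
import numpy as np

def build_sentence_tree(sent_embs):
    """Toy greedy tree induction over recipe sentences (illustration only):
    repeatedly merge the most similar pair of *adjacent* nodes, so cooking
    steps that belong together end up under one subtree."""
    nodes = [(i,) for i in range(len(sent_embs))]     # leaves: sentence indices
    embs = [e for e in sent_embs]
    tree = []
    while len(nodes) > 1:
        sims = [embs[i] @ embs[i + 1] for i in range(len(nodes) - 1)]
        i = int(np.argmax(sims))                      # most similar adjacent pair
        tree.append((nodes[i], nodes[i + 1]))         # record the merge
        nodes[i:i + 2] = [nodes[i] + nodes[i + 1]]
        embs[i:i + 2] = [(embs[i] + embs[i + 1]) / 2.0]
    return tree                                       # list of merges = binary tree
```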
Video captioning targets interpreting complex visual content as text descriptions, which requires the model to fully understand video scenes, including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to provide object proposals and use attention mechanisms to model the relations between objects. They often miss semantic concepts undefined in the pretrained detection model, and fail to identify exact predicate relationships between objects. In this paper, we investigate the open research task of generating text descriptions for given videos, and propose a Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We further build meta-concept graphs dynamically with the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates, to model the video sequence structure. We validate the efficacy of our proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.
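A hedged sketch of the graph-construction idea: treat region and word features as meta-concept nodes and build a soft adjacency from their similarity. The thresholding and normalization choices below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def build_meta_concept_graph(region_feats, word_feats, tau=0.5):
    """Illustrative sketch: form cross-modal meta-concept nodes and a graph.

    region_feats: (R, d) visual region features
    word_feats:   (W, d) word features projected into the same space
    Returns node features and a soft adjacency built from cosine similarity.
    """
    nodes = torch.cat([region_feats, word_feats], dim=0)         # (R+W, d)
    normed = F.normalize(nodes, dim=-1)
    adj = torch.relu(normed @ normed.t() - tau)                  # keep strong edges
    adj = adj / adj.sum(dim=-1, keepdim=True).clamp_min(1e-6)    # row-normalize
    return nodes, adj

# One graph propagation step (a plain GCN-style update):
# nodes = torch.relu(adj @ nodes @ weight)
```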
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the amount of available data remains small. To address ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments, objectively verifying the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER, and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
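The class-imbalance problem mentioned above is commonly attacked with inverse-frequency sampling. Below is a minimal PyTorch sketch of this standard remedy, which is not necessarily the solution the paper settles on:

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=32):
    """Inverse-frequency sampling: rare expression classes are drawn more
    often, so each batch sees a more balanced class distribution."""
    labels_t = torch.as_tensor(labels)
    counts = torch.bincount(labels_t)                 # per-class frequencies
    weights = 1.0 / counts[labels_t].float()          # per-sample weights
    sampler = WeightedRandomSampler(weights, num_samples=len(labels_t),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```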
Face anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups, with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are the new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
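A minimal sketch of a quality-invariance contrastive objective in the spirit of point (2): matched high-/low-quality views of the same face are positives, and other faces in the batch are negatives. This is a generic NT-Xent-style loss, assumed here for illustration rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def quality_contrastive_loss(feat_hq, feat_lq, temperature=0.1):
    """Contrast features of the same face under high vs. low image quality.

    feat_hq, feat_lq: (B, d) embeddings of matched high-/low-quality crops;
    row i of each tensor comes from the same face (the positive pair).
    """
    z1 = F.normalize(feat_hq, dim=-1)
    z2 = F.normalize(feat_lq, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # symmetric InfoNCE: each view must pick its quality-shifted counterpart
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```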
Image virtual try-on aims at replacing the clothes on a person image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps onto an unreasonable body part. Based on this in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts sharpened semantic parsing on the try-on person. Aided by semantic guidance and pose priors, textures of various complexity are selectively blended with human parts in a copy-and-paste manner. Then, a Generative Module (GM) is utilized to synthesize the final try-on image and to learn de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
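A toy version of the semantically-guided occlusion simulation: copy chosen semantic parts from a donor image over the try-on result to create an occluded training sample. The function below is an illustrative assumption, not the exact DOC-VTON module.

```python
import numpy as np

def semantic_occlusion_mixup(tryon_img, donor_img, donor_mask, part_ids):
    """Simulate occlusion by semantic copy-and-paste (illustration only).

    tryon_img, donor_img: (H, W, 3) uint8 images of the same resolution
    donor_mask:           (H, W) semantic parsing map of the donor image
    part_ids:             semantic labels (e.g. arms, old garment) to paste
    """
    occluded = tryon_img.copy()
    paste = np.isin(donor_mask, part_ids)   # pixels belonging to chosen parts
    occluded[paste] = donor_img[paste]      # copy-and-paste blending
    return occluded
```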